7 research outputs found

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of nonverbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model allows the quality of the interaction to be evaluated quantitatively, using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal considered.
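    As a rough illustration of the kind of statistical evaluation the abstract alludes to, the sketch below mimics a lens-model-style analysis: a sender's true state is externalized through noisy cues, a perceiver (here standing in for the robot's recognition phase) reads those cues, and recognition effectiveness is summarized by the correlation between judgment and truth. The variables, the linear-cue setup, and the gaze-angle interpretation are illustrative assumptions, not the paper's actual procedure.

```python
# Minimal sketch of a lens-model-style evaluation (illustrative, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Distal variable: the sender's true state (e.g., intended gaze target angle).
true_state = rng.normal(0.0, 1.0, n)

# Proximal cues: noisy externalizations of that state (e.g., eye and head yaw).
cues = np.column_stack([
    true_state + rng.normal(0.0, 0.3, n),   # cue 1: eye direction
    true_state + rng.normal(0.0, 0.6, n),   # cue 2: head direction
])

# Perceiver (the recognition phase): a linear readout fitted to the cues.
weights = np.linalg.lstsq(cues, true_state, rcond=None)[0]
judgment = cues @ weights

# "Achievement" in lens-model terms: how well judgments track the true state.
achievement = np.corrcoef(judgment, true_state)[0, 1]
print(f"achievement (recognition effectiveness): {achievement:.3f}")
```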

    Speaky for robots: the development of vocal interfaces for robotic applications

    The speech technologies currently available on mobile devices achieve effective performance in terms of both reliability and the breadth of language they can capture. The availability of high-performing speech recognition engines may also support the deployment of vocal interfaces in consumer robots. However, the design and implementation of such interfaces still require significant work: the language processing chain and the domain knowledge must be built for the specific features of the robotic platform, the deployment environment, and the tasks to be performed. Hence, such interfaces are currently built in a completely ad hoc way. In this paper, we present a design methodology, together with a support tool, aiming to streamline and improve the implementation of dedicated vocal interfaces for robots. This work was developed within an experimental project called Speaky for Robots. We extend the existing vocal interface development framework to target robotic applications. The proposed solution is built using a bottom-up approach, refining the language processing chain through the development of vocal interfaces for different robotic platforms and domains. The approach is validated both in experiments involving several research prototypes and in tests involving end users.
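    To make concrete why such a language processing chain must be tailored per platform and domain, here is a toy sketch of the final stage of such a chain: mapping recognized utterances onto a small, platform-specific command grammar. The grammar, action names, and parsing strategy are all hypothetical examples, not Speaky's actual toolchain.

```python
# Toy grammar-based command mapper for one hypothetical robot platform
# (illustrative only; Speaky's real processing chain is richer).
import re
from typing import Optional, Tuple

# Domain knowledge for this platform: known actions and locations.
ACTIONS = {"go": "NAVIGATE", "grab": "PICK", "drop": "PLACE"}
LOCATIONS = {"kitchen", "lab", "charging station"}

def parse_command(utterance: str) -> Optional[Tuple[str, str]]:
    """Map a recognized utterance to a (robot_action, argument) pair."""
    text = utterance.lower().strip()
    m = re.match(r"go to the (.+)", text)
    if m and m.group(1) in LOCATIONS:
        return (ACTIONS["go"], m.group(1))
    m = re.match(r"(grab|drop) the (\w+)", text)
    if m:
        return (ACTIONS[m.group(1)], m.group(2))
    return None  # out-of-grammar: the interface would ask the user to rephrase

print(parse_command("Go to the kitchen"))   # ('NAVIGATE', 'kitchen')
print(parse_command("Grab the cup"))        # ('PICK', 'cup')
```

    Changing the robot or the deployment environment means rebuilding exactly these pieces, the action vocabulary, the known locations, and the accepted phrasings, which is the ad hoc effort the paper's methodology and tool aim to streamline.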